6 research outputs found

    A new Leach protocol based on ICH-Leach for adaptive image transferring using DWT

    The rapid development and miniaturization of CMOS image sensors in recent years have enabled the creation of Wireless Multimedia Sensor Networks (WMSN). Transferring images through such networks has therefore become an important field of research. The main goal is to transmit data from one sensor node to another until it finally reaches the sink node. Routing protocols thus play an important role in managing and optimizing node resources, particularly energy consumption. Some routing protocols, such as the Leach protocol and its derived variants, are known for their ability to save energy and extend the lifetime of the network. However, transferring multimedia content was not a priority for these protocols. In this paper, the main idea is to adapt ICH-Leach [1] for image transfer. This protocol was tested in previous work and performed well against Leach [2], balanced Leach [3] and MLD-Leach [4]. The Haar wavelet transform is used at the application layer to select the resolution level for image transmission, depending on the flow rate between the sensor node and the sink node. The paper reports statistics on network lifetime, energy consumption, and the quality of received images measured by the peak signal-to-noise ratio (PSNR). The Castalia framework is used for simulation; the obtained results show the efficiency of the proposed approach, extending the network lifetime and transmitting more images with better quality than other protocols.
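    As a rough illustration of the two measurements involved, the Python sketch below shows how a Haar multi-resolution decomposition (via the pywt library) could be truncated to fit a transmission budget, and how PSNR is computed for a received image. The function names (choose_resolution, psnr) and the byte-budget heuristic are assumptions for illustration, not the paper's actual protocol logic.

```python
# Illustrative sketch only: not the paper's protocol code.
import numpy as np
import pywt  # Haar discrete wavelet transform

def choose_resolution(image, max_bytes, levels=3):
    """Decompose a grayscale image with the Haar wavelet and keep only as many
    resolution levels as a hypothetical transmission budget (max_bytes) allows."""
    coeffs = pywt.wavedec2(image.astype(np.float32), 'haar', level=levels)
    kept = [coeffs[0]]                      # coarsest approximation band
    budget = coeffs[0].nbytes
    for detail in coeffs[1:]:               # (cH, cV, cD) bands, coarse to fine
        cost = sum(band.nbytes for band in detail)
        if budget + cost > max_bytes:
            break                           # stop at the finest level that fits
        kept.append(detail)
        budget += cost
    return kept                             # coefficients actually transmitted

def psnr(original, received, peak=255.0):
    """Peak signal-to-noise ratio between the original and the received image."""
    mse = np.mean((original.astype(np.float64) - received.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```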

    Leach routing protocol for image transfer using Castalia simulator

    In a wireless multimedia sensor network, a routing protocol plays an important role in saving the limited resources of sensors. It allows a node to transmit multimedia content, an image in our case, to the sink. In this paper, we use the Castalia simulator to test different configurations of the Leach routing protocol. The aim of these tests is to determine how to transmit the largest number of images through the network with minimum energy consumption.
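    For context, the cluster-head election rule that the Leach family [2] is built on can be sketched as follows; this is the standard published threshold, not code taken from the paper, and the helper names are illustrative.

```python
# Standard LEACH cluster-head election threshold, shown for context only.
import random

def leach_threshold(p, r):
    """T(n) = p / (1 - p * (r mod 1/p)) for desired cluster-head fraction p at round r."""
    return p / (1.0 - p * (r % int(round(1.0 / p))))

def elects_itself(was_head_recently, p, r):
    """A node not chosen in the last 1/p rounds elects itself with probability T(n)."""
    if was_head_recently:
        return False
    return random.random() < leach_threshold(p, r)

# Example: with p = 0.05, roughly 5% of eligible nodes become cluster heads per round.
```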

    Visualization of hyperspectral images on parallel and distributed platform: Apache Spark

    The field of hyperspectral image storage and processing has undergone a remarkable evolution in recent years. Visualizing these images is a challenge because the number of bands exceeds three, so direct visualization using the standard red, green and blue (RGB) or hue, saturation and lightness (HSL) systems is not feasible. One potential solution is to reduce the dimensionality of the image to three dimensions and then assign each dimension to a color channel. Conventional tools and algorithms have become incapable of producing results within a reasonable time. In this paper, we present a new distributed method for visualizing hyperspectral images based on principal component analysis (PCA) and implemented in a distributed parallel environment (Apache Spark). With the proposed method, large hyperspectral images are visualized in less time and with the same quality as the classical visualization method.
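    A minimal sketch of the general approach, assuming pixels are stored as Spark DataFrame rows of spectral vectors; the column names and the tiny in-line data are illustrative only and not the paper's implementation.

```python
# Illustrative sketch only; column names and in-line data are not from the paper.
from pyspark.sql import SparkSession
from pyspark.ml.feature import PCA
from pyspark.ml.linalg import Vectors

spark = SparkSession.builder.appName("hsi-visualization").getOrCreate()

# Each row is one pixel; "bands" holds its full spectral vector (5 bands here).
pixels = spark.createDataFrame(
    [(Vectors.dense([0.12, 0.34, 0.56, 0.78, 0.90]),),
     (Vectors.dense([0.10, 0.30, 0.50, 0.70, 0.95]),),
     (Vectors.dense([0.20, 0.25, 0.40, 0.60, 0.80]),)],
    ["bands"])

# Reduce the spectral dimension to three principal components, which can then
# be rescaled and mapped to the R, G and B channels for display.
pca = PCA(k=3, inputCol="bands", outputCol="rgb")
model = pca.fit(pixels)
model.transform(pixels).select("rgb").show(truncate=False)
```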

    Spectral Classification of a Set of Hyperspectral Images using the Convolutional Neural Network, in a Single Training

    Hyperspectral imagery has seen a great evolution in recent years. Consequently, several fields (medicine, agriculture, geosciences) need to classify these hyperspectral images automatically, with a high classification rate and in an acceptable time. The state of the art includes several classification algorithms based on the Convolutional Neural Network (CNN); each algorithm is trained on part of an image and then performs prediction on the rest. This article proposes a new fast spectral classification algorithm based on a CNN, which builds a composite image from multiple hyperspectral images and then trains the model only once on the composite image. After training, the model can predict each image separately. To validate the proposed algorithm, two freely available hyperspectral images are used; the training time of the proposed model on the composite image is better than that of the state-of-the-art model.
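    The composite-training idea could look roughly like the sketch below, in which pixel spectra from several images are stacked into one training set and a small 1D CNN is trained once; the shapes, layer sizes, and synthetic data are assumptions, not the authors' architecture.

```python
# Illustrative sketch only; shapes, layers and random data are not the authors'.
import numpy as np
import tensorflow as tf

n_bands, n_classes = 100, 9               # assumed spectral depth and label count

def pixels_of(cube, labels):
    """Flatten an (H, W, bands) cube into (pixels, bands) with matching labels."""
    return cube.reshape(-1, n_bands), labels.reshape(-1)

# Two synthetic cubes standing in for the two hyperspectral images.
img_a, lab_a = np.random.rand(50, 50, n_bands), np.random.randint(n_classes, size=(50, 50))
img_b, lab_b = np.random.rand(40, 60, n_bands), np.random.randint(n_classes, size=(40, 60))
xa, ya = pixels_of(img_a, lab_a)
xb, yb = pixels_of(img_b, lab_b)

# Composite training set: spectra from both images stacked together.
x_composite = np.concatenate([xa, xb])[..., np.newaxis]   # (pixels, bands, 1)
y_composite = np.concatenate([ya, yb])

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_bands, 1)),
    tf.keras.layers.Conv1D(32, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

model.fit(x_composite, y_composite, epochs=1, batch_size=256)  # trained only once
preds_a = model.predict(xa[..., np.newaxis])                   # then predict each image separately
```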

    Arabic Text Summarization Challenges using Deep Learning Techniques: A Review

    Text summarization is a challenging field in Natural Language Processing because of the language modelling and techniques required to produce concise summaries. Dealing with the Arabic language increases the challenge, given the many features of Arabic, the lack of tools and resources for it, and the need to adapt and model algorithms accordingly. In this paper, we review several studies on Arabic text summarization that apply different algorithms to several datasets. We then compare these studies and draw conclusions to guide researchers in their further work.

    Convolutional Neural Networks in Predicting Missing Text in Arabic

    Missing text prediction is one of the major concerns of the Natural Language Processing deep learning community. However, most text prediction research has been carried out on languages other than Arabic. In this paper, we take a first step in training a deep learning language model on the Arabic language. Our contribution is the prediction of missing text in documents by applying Convolutional Neural Networks (CNN) to Arabic language models. We built CNN-based language models tailored to the specific characteristics of the Arabic language. We prepared a dataset of a large number of text documents freely downloaded from the Arab World Books, Hindawi Foundation, and Shamela collections. To calculate prediction accuracy, we compared documents with complete text against the same documents with missing text. We carried out training, validation, and test steps at three different stages in order to increase prediction performance. At the first stage, the model was trained on documents by the same author; at the second stage, on documents from the same dataset; and at the third stage, on all documents combined. The training, validation, and test steps were repeated many times, changing the author, the dataset, and the author-dataset combination, respectively. We also enlarged the training data by feeding the CNN model a larger quantity of text each time. The model achieved high performance in Arabic text prediction using Convolutional Neural Networks, with an accuracy reaching 97.8% in the best case.
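    A minimal sketch of a CNN that predicts a missing token from its surrounding context window, assuming a tokenized, integer-encoded corpus; the vocabulary size, window width, and layer sizes are illustrative and not the paper's actual model.

```python
# Illustrative sketch only; sizes and data are placeholders, not the paper's model.
import numpy as np
import tensorflow as tf

vocab_size, window = 20000, 10            # tokens of context surrounding the gap

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window,)),
    tf.keras.layers.Embedding(vocab_size, 128),
    tf.keras.layers.Conv1D(256, 3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(vocab_size, activation="softmax"),  # distribution over the missing token
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Toy batch: integer-encoded context windows and the index of the word removed from each.
contexts = np.random.randint(vocab_size, size=(64, window))
missing = np.random.randint(vocab_size, size=(64,))
model.fit(contexts, missing, epochs=1)
```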